Large deviations in the perceptron model and consequences for active learning

Authors

Abstract

Active learning (AL) is a branch of machine learning that deals with problems where unlabeled data is abundant yet obtaining labels is expensive. The learning algorithm has the possibility of querying a limited number of samples to obtain the corresponding labels, which are subsequently used for supervised learning. In this work, we consider the task of choosing the subset of samples to be labeled from a fixed, finite pool of samples. We assume the pool of samples to be a random matrix and the ground truth labels to be generated by a single-layer teacher neural network. We employ replica methods to analyze the large deviations for the accuracy achieved after supervised learning on a subset of the original pool. These large deviations then provide optimal achievable performance boundaries for any AL algorithm. We show that the optimal performance can be efficiently approached by simple message-passing AL algorithms. We also provide a comparison with some other popular active learning strategies.
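To make the pool-based setting of the abstract concrete, below is a minimal sketch of the teacher-student perceptron setup it describes: a Gaussian random pool, labels generated by a single-layer teacher, and a fixed label budget. The greedy uncertainty-based query rule, the least-squares student, and all sizes and names are illustrative assumptions, not the replica analysis or the message-passing algorithms studied in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Pool-based teacher-student setup: a Gaussian random pool X and ground truth
# labels produced by a single-layer "teacher" perceptron w_teacher.
d, pool_size, budget = 50, 1000, 100           # illustrative sizes, not from the paper
X = rng.standard_normal((pool_size, d)) / np.sqrt(d)
w_teacher = rng.standard_normal(d)
y = np.sign(X @ w_teacher)

# Greedy uncertainty-based querying (a generic heuristic, NOT the paper's
# message-passing algorithm): repeatedly label the pool sample closest to the
# current decision boundary, then refit a "student" linear model by least squares.
labeled = list(rng.choice(pool_size, size=5, replace=False))   # small random seed set
w_student = np.linalg.lstsq(X[labeled], y[labeled], rcond=None)[0]
while len(labeled) < budget:
    scores = np.abs(X @ w_student)
    scores[labeled] = np.inf                   # never re-query an already labeled sample
    labeled.append(int(np.argmin(scores)))     # most "uncertain" pool sample
    w_student = np.linalg.lstsq(X[labeled], y[labeled], rcond=None)[0]

# Accuracy on the original pool: the quantity whose large deviations the paper studies.
accuracy = np.mean(np.sign(X @ w_student) == y)
print(f"pool accuracy with {budget} labels: {accuracy:.3f}")
```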


Similar resources

The search for the self in Beckett's theatre: Waiting for Godot and Endgame

This thesis is based upon the works of Samuel Beckett, one of the greatest writers of contemporary literature. Here, I have tried to focus on one of the main themes in Beckett's works: the search for the real "me" or the real self, which is not only a problem to be solved for Beckett's man but also for each of us. I have tried to show Beckett's techniques in approaching this unattainable goal, base...


Active Learning with Perceptron for Structured Output

Typically, structured output scenarios are characterized by a high cost associated with obtaining supervised training data, motivating the study of active learning protocols for these situations. Starting with active learning approaches for multiclass classification, we first design querying functions for selecting entire structured instances, exploring the tradeoff between selecting instances ...


Learning, Large Deviations and Rare Events

We examine the role of generalized constant gain stochastic gradient (SGCG) learning in generating large deviations of an endogenous variable from its rational expectations value. We show analytically that these large deviations can occur with a frequency associated with a fat tailed distribution even though the model is driven by thin tailed exogenous stochastic processes. We characterize thes...


Analysis of Perceptron-Based Active Learning

We start by showing that in an active learning setting, the Perceptron algorithm needs Ω(1/ε²) labels to learn linear separators within generalization error ε. We then present a simple selective sampling algorithm for this problem, which combines a modification of the perceptron update with an adaptive filtering rule for deciding which points to query. For data distributed uniformly over the un...
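The snippet above describes a selective sampling protocol that combines a perceptron update with an adaptive filtering rule for deciding which points to query. The sketch below illustrates that general idea only: the fixed margin threshold, the data model, and the function name are assumptions standing in for the adaptive rule of that paper.

```python
import numpy as np

def selective_sampling_perceptron(stream, d, margin=0.1, lr=1.0):
    """Perceptron-style online learner that only queries labels for points
    falling inside a margin band around its current hyperplane.

    `stream` yields (x, label_oracle) pairs; the oracle is called only when the
    point is actually queried. The fixed `margin` threshold is an illustrative
    stand-in for the adaptive filtering rule mentioned in the snippet above.
    """
    w = np.zeros(d)
    queries = 0
    for x, label_oracle in stream:
        score = w @ x
        if abs(score) <= margin:              # uncertain point: pay for its label
            y = label_oracle()
            queries += 1
            if y * score <= 0:                # mistake (or zero margin): perceptron update
                w += lr * y * x
    return w, queries

# Usage on synthetic data from a random linear separator (hypothetical setup).
rng = np.random.default_rng(1)
d = 20
w_true = rng.standard_normal(d)
points = rng.standard_normal((2000, d)) / np.sqrt(d)
stream = ((x, (lambda x=x: float(np.sign(w_true @ x)))) for x in points)

w, queries = selective_sampling_perceptron(stream, d)
test = rng.standard_normal((1000, d))
test_error = np.mean(np.sign(test @ w) != np.sign(test @ w_true))
print(f"queried {queries} labels, test error: {test_error:.3f}")
```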


Active learning methods: a way of tackling large classroom setting

Today, university instructors face the problem of large numbers of learners in their classes, and they therefore rely on lecturing to deliver course material and cope with this problem. Although lecturing is one way of addressing it, it does not lead to deep learning of the course material. Constructivists, in contrast, recommend the use of active learning methods in large classes. The aim of this study was therefore to identify active learn...



Journal

Journal title: Machine Learning: Science and Technology

Year: 2021

ISSN: 2632-2153

DOI: https://doi.org/10.1088/2632-2153/abfbbb